
    Search and Delivery of Standardized Learning Resources Based on SOAP Messaging and Native XML Databases

    With the progress of Web-based learning technologies, standardized digital repositories and learning management systems (LMSs) are becoming increasingly prevalent. The heterogeneity of their underlying databases and access methods, however, makes it difficult to share and exchange learning resources between them. In this paper, we propose an architecture for the search and delivery of learning resources. Based on the SOAP transmission protocol, the architecture seeks to improve interoperability between heterogeneous E-Learning implementations. We also present a general-purpose query language as a building block of the architecture. The language provides a unified query interface for resource repositories, shielding users from the differences in underlying databases and metadata schemas. To illustrate our design, we present an implementation using LOM, a native XML database, and XPath. The last part of this paper discusses technical and pedagogical issues regarding the launching of content from within standardized LMSs.
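    To make the query flow concrete, here is a minimal sketch (in Python, not from the paper) of how a client might wrap an XPath query over LOM metadata in a SOAP envelope and post it to a repository endpoint; the endpoint URL, operation name, and namespace are hypothetical placeholders.

```python
# A minimal sketch of the kind of SOAP round trip the architecture describes:
# the client wraps an XPath query over LOM metadata in a SOAP envelope and
# posts it to a repository endpoint. The endpoint URL, operation name, and
# namespace below are hypothetical, not taken from the paper.
import requests

REPOSITORY_URL = "http://repository.example.org/soap"  # hypothetical endpoint

# XPath selecting LOM records whose general/title contains "algebra"
xpath_query = '//lom[contains(general/title/string/text(), "algebra")]'

soap_envelope = f"""<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <searchResources xmlns="urn:example:lr-query">  <!-- hypothetical operation -->
      <query language="XPath">{xpath_query}</query>
      <maxResults>20</maxResults>
    </searchResources>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    REPOSITORY_URL,
    data=soap_envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "urn:example:lr-query#searchResources"},
    timeout=10,
)
print(response.status_code)
print(response.text[:500])  # raw SOAP response carrying matching LOM records
```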

    User experience for multi-device ecosystems: challenges and opportunities

    Smart devices have pervaded every aspect of people's daily lives. Though single-device UX products are relatively successful, the experience of cross-device interaction is still far from satisfactory and can be a source of frustration. Inconsistent UI styles, unclear coordination, varying fidelity, pairwise-only interactions, lack of understanding of user intent, limited data sharing and security, and other problems typically degrade the experience in a multi-device ecosystem. Redesigning UX specifically for multi-device ecosystems is challenging but at the same time affords many new opportunities. This workshop brings together researchers, practitioners, and developers with different backgrounds, including fields such as computational design, affective computing, and multimodal interaction, to exchange views, share ideas, and explore future directions for UX in distributed scenarios, especially heterogeneous cross-device ecosystems. The topics cover, but are not limited to, distributed UX design, accessibility, cross-device HCI, human factors in distributed scenarios, user-centric interfaces, and multi-device ecosystems.

    Facilitating Self-monitored Physical Rehabilitation with Virtual Reality and Haptic feedback

    Physical rehabilitation is essential to recovery from joint replacement operations. As a representative case, total knee arthroplasty (TKA) requires patients to perform intensive physical exercises to regain the knee's range of motion and muscle strength. However, current joint replacement rehabilitation methods rely heavily on therapists for supervision, and existing computer-assisted systems lack consideration for self-monitoring, making at-home physical rehabilitation difficult. In this paper, we investigated design recommendations that would enable self-monitored rehabilitation through clinical observations and focus group interviews with doctors and therapists. With this knowledge, we further explored Virtual Reality (VR)-based visual presentation and supplemental haptic motion guidance in our implementation VReHab, a self-monitored, multimodal physical rehabilitation system with VR, vibrotactile, and pneumatic feedback in a TKA rehabilitation context. We found that a third-person view of the user's motion, reconstructed in real time on a virtual avatar and overlaid with the target pose, effectively provides motion awareness and guidance, while haptic feedback helps enhance users' motion accuracy and stability. Finally, we implemented VReHab to facilitate self-monitored post-operative exercises and validated its effectiveness through a clinical study with 10 patients.
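    As an illustration of the kind of motion monitoring such a system relies on, the sketch below (not the authors' implementation) estimates the knee flexion angle from three tracked joint positions and fires a corrective vibrotactile cue when the pose deviates from a target; the thresholds and the trigger_vibration() helper are hypothetical.

```python
# A minimal sketch, not the authors' implementation: estimating the knee angle
# from three tracked joint positions and deciding when to fire a corrective
# vibrotactile cue. The target, tolerance, and trigger_vibration() helper are
# illustrative assumptions.
import numpy as np

def knee_angle(hip, knee, ankle):
    """Angle at the knee (degrees) formed by the hip-knee and ankle-knee vectors."""
    v1 = np.asarray(hip, dtype=float) - np.asarray(knee, dtype=float)
    v2 = np.asarray(ankle, dtype=float) - np.asarray(knee, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def trigger_vibration(intensity):
    """Placeholder for the haptic actuator interface."""
    print(f"vibrotactile cue, intensity={intensity:.2f}")

TARGET_ANGLE = 90.0  # hypothetical target flexion for one exercise step
TOLERANCE = 10.0     # degrees of allowed deviation before cueing

def update(hip, knee, ankle):
    angle = knee_angle(hip, knee, ankle)
    deviation = abs(angle - TARGET_ANGLE)
    if deviation > TOLERANCE:
        # scale cue intensity with how far the pose is from the target
        trigger_vibration(min(1.0, deviation / 45.0))
    return angle

# example frame: rough joint positions in metres
print(update(hip=[0.0, 1.0, 0.0], knee=[0.0, 0.5, 0.05], ankle=[0.0, 0.1, 0.3]))
```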

    UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language

    We introduce UbiPhysio, a milestone framework that delivers fine-grained action descriptions and feedback in natural language to support people's daily functioning, fitness, and rehabilitation activities. This expert-like capability assists users in properly executing actions and maintaining engagement in remote fitness and rehabilitation programs. Specifically, the proposed UbiPhysio framework comprises a fine-grained action descriptor and a knowledge retrieval-enhanced feedback module. The action descriptor translates action data, represented by a set of biomechanical movement features we designed based on clinical priors, into textual descriptions of action types and potential movement patterns. Building on physiotherapeutic domain knowledge, the feedback module provides clear and engaging expert feedback. We evaluated UbiPhysio's performance through extensive experiments with data from 104 diverse participants, collected in a home-like setting during 25 types of everyday activities and exercises. We assessed the quality of the language output under different tuning strategies using standard benchmarks, and we conducted a user study to gather insights from clinical experts and potential users on our framework. Our initial tests show promise for deploying UbiPhysio in real-life settings without specialized devices.
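    The sketch below illustrates, under stated assumptions, the two-stage idea the abstract describes: biomechanical features are translated into a textual action description, which is then combined with retrieved physiotherapy knowledge to prompt an LLM for feedback. The feature names, thresholds, and helper functions are hypothetical, not taken from the paper.

```python
# A minimal sketch of the descriptor-plus-retrieval pipeline the abstract
# outlines. Feature names, thresholds, and the retrieval/prompt helpers are
# hypothetical illustrations, not the authors' implementation.
from typing import Dict, List

def describe_action(features: Dict[str, float]) -> str:
    """Rule-based translation of movement features into a plain-language description."""
    parts = [f"Detected action: {features.get('action_type', 'unknown')}."]
    if features.get("knee_flexion_deg", 0.0) < 60.0:
        parts.append("Knee flexion is limited (below 60 degrees).")
    if features.get("trunk_lean_deg", 0.0) > 15.0:
        parts.append("The trunk leans forward noticeably during the movement.")
    return " ".join(parts)

def retrieve_knowledge(description: str, corpus: List[str]) -> List[str]:
    """Toy keyword retrieval standing in for the knowledge-retrieval module."""
    words = description.lower().split()
    return [doc for doc in corpus if any(w in doc.lower() for w in words)][:2]

def build_feedback_prompt(description: str, knowledge: List[str]) -> str:
    """Assemble the prompt that would be sent to an LLM for expert-style feedback."""
    return (
        "You are a physiotherapy assistant.\n"
        f"Observed movement: {description}\n"
        f"Relevant guidance: {' '.join(knowledge)}\n"
        "Give clear, encouraging feedback on how to improve the next repetition."
    )

features = {"action_type": "sit-to-stand", "knee_flexion_deg": 48.0, "trunk_lean_deg": 22.0}
corpus = ["Limited knee flexion often responds to slower, assisted repetitions.",
          "Excessive trunk lean suggests compensating for weak quadriceps."]
desc = describe_action(features)
print(build_feedback_prompt(desc, retrieve_knowledge(desc, corpus)))
```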

    MindShift: Leveraging Large Language Models for Mental-States-Based Problematic Smartphone Use Intervention

    Full text link
    Problematic smartphone use negatively affects physical and mental health. Despite a wide range of prior research, existing persuasive techniques are not flexible enough to provide dynamic persuasion content based on users' physical contexts and mental states. We first conduct a Wizard-of-Oz study (N=12) and an interview study (N=10) to summarize the mental states behind problematic smartphone use: boredom, stress, and inertia. This informs our design of four persuasion strategies: understanding, comforting, evoking, and scaffolding habits. We leverage large language models (LLMs) to enable the automatic and dynamic generation of effective persuasion content. We develop MindShift, a novel LLM-powered problematic smartphone use intervention technique. MindShift takes users' in-the-moment physical contexts, mental states, app usage behaviors, and goals and habits as input, and generates high-quality, flexible persuasive content with appropriate persuasion strategies. We conduct a 5-week field experiment (N=25) to compare MindShift with baseline techniques. The results show that MindShift significantly improves intervention acceptance rates by 17.8-22.5% and reduces smartphone use frequency by 12.1-14.4%. Moreover, users show a significant drop in smartphone addiction scale scores and a rise in self-efficacy. Our study sheds light on the potential of leveraging LLMs for context-aware persuasion in other behavior change domains.
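    As a rough illustration of how such context-aware persuasion might be assembled (an assumption-laden sketch, not the authors' code), the snippet below maps an inferred mental state to one of the four strategies named in the abstract and builds an LLM prompt from the user's context, goals, and habits; the mapping and prompt wording are illustrative.

```python
# A minimal sketch, not the authors' code: choose a persuasion strategy from an
# inferred mental state and assemble the prompt that an LLM would turn into an
# intervention message. The state-to-strategy mapping and wording are assumptions.
STRATEGY_BY_STATE = {
    "boredom": "evoking",            # spark a more meaningful alternative activity
    "stress": "comforting",          # acknowledge the feeling before redirecting
    "inertia": "scaffolding habits", # suggest a small, concrete next step
}

def build_persuasion_prompt(mental_state, physical_context, current_app, goal, habit):
    strategy = STRATEGY_BY_STATE.get(mental_state, "understanding")
    return (
        f"The user is using {current_app} while {physical_context}, "
        f"likely driven by {mental_state}.\n"
        f"Their stated goal is: {goal}. A habit they want to build: {habit}.\n"
        f"Write a short, friendly intervention message using the "
        f"'{strategy}' persuasion strategy."
    )

prompt = build_persuasion_prompt(
    mental_state="inertia",
    physical_context="lying in bed after the alarm",
    current_app="a short-video app",
    goal="get to the gym three times a week",
    habit="stretching for five minutes each morning",
)
print(prompt)  # this prompt would then be sent to an LLM to generate the message
```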